USA TODAY AR
This is a long one. If you'd rather see the final products than hear about the process, skip to the next centered headline for a TL;DR.
  The minute I rejoined the Emerging Tech team at USA TODAY, I got to work on 321 LAUNCH. This standalone app took 8 weeks to put together, and earned us over a hundred thousand views in the first month after release, establishing our brand as a force in the augmented reality space.
  An appetite existed for AR news, of that there was no question. But 8-week development cycles and a new app for every experience don't make for a scalable system. Instead, my team had to figure out a way to deliver all the features that make AR valuable while minimizing development time and user friction.
  Web, of course, isn't close enough to the metal. It would remove nearly all of our favorite features and choke performance. Instead, we chose the next most accessible platform available: integrating Unity Engine into the USA TODAY app. With its millions upon millions of installs, the app would expand my team's reach to unprecedented levels, while still allowing us to develop fast and develop once.
  This wouldn't come without challenges, though. First, we'd have to find a way to integrate seamlessly into the native app. With such a large user base, every change you make to the code base matters, so we had to be extremely sensitive to the app developers' workflows, as well as to the app's performance and size. Through a seriously grueling summer of development, my team was able to wrap the Unity Engine in a convenient black box. Non-AR users would never see Unity start running, and it added less than 30 MB to the app's total size.
  From that size, one can easily surmise that we didn't include our actual content in our app. Instead, it was a platform to which we could deploy experiences as they were made without the need to update the app. If breaking news is happening, we can't wait a month until the next version of the app is ready! By default, we had access to all of Unity's rendering, physics, animation, and interactive tools: all we had to do to use them was host an AssetBundle containing the relevant editorial content.
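The loading pattern described above can be sketched in a few lines of Unity C#. This is an illustrative sketch only, not our production code: the URL and asset name are placeholders, and the real system handled versioning, caching, and error recovery.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: fetch a remotely hosted AssetBundle and instantiate its content,
// so editorial experiences ship without an app-store update.
public class ExperienceLoader : MonoBehaviour
{
    // Placeholder URL; the real CDN path is not shown here.
    const string BundleUrl = "https://example.com/bundles/experience";

    IEnumerator Start()
    {
        using (var request = UnityWebRequestAssetBundle.GetAssetBundle(BundleUrl))
        {
            yield return request.SendWebRequest();
            if (!string.IsNullOrEmpty(request.error))
                yield break; // real code would surface an error state to the user

            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);

            // "Root" is an assumed prefab name for the experience's entry point.
            var prefab = bundle.LoadAsset<GameObject>("Root");
            Instantiate(prefab);
        }
    }
}
```

Because the bundle carries only serialized scenes, models, and animation data, everything it references must already exist in the player build, which is exactly the constraint the next section is about.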
  For the basic AR experience, that might be enough. But we on Emerging Tech didn't make basic AR experiences: we needed deep interactivity. There's the rub: we needed code for custom behaviors, but mobile platforms (especially iOS) forbid running newly compiled code downloaded outside the app stores. In design, it was frustrating to realize that what would take only one line of code would be impossible to include in a remotely-deployed experience!
  For this purpose I built a custom runtime visual scripting system by extending the engine's UnityEvent, AnimatorController, and custom editor GUI systems. The animator was the key: by tying its state machine behaviors to the rest of the scene, I could perform complex programming without a single line of code, compiled or otherwise.
To avoid code, we determined a 3D soccer player's behavior within this loop, using pre-built random number generators and collision detection to set Animator parameters.
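A minimal sketch of that pattern follows. The component and parameter names ("KickVariant", "Blocked") are my illustrative assumptions, not the production API; the point is that pre-built components roll random numbers and detect collisions, then drive everything through Animator parameters, so no new code ships in the bundle.

```csharp
using UnityEngine;

// Sketch of the no-code loop: generic, pre-shipped components feed the
// Animator's state machine, which carries the actual behavior as data.
[RequireComponent(typeof(Animator))]
public class KickController : MonoBehaviour
{
    Animator animator;

    void Awake() => animator = GetComponent<Animator>();

    // Wired up via UnityEvent in the editor when a new kick begins.
    public void ChooseKick()
    {
        // A built-in random roll selects one of several pre-authored kicks.
        animator.SetInteger("KickVariant", Random.Range(0, 4));
        animator.SetTrigger("Kick");
    }

    // Built-in collision detection: the player's collider meeting the
    // ball counts as a save, recorded as an Animator parameter.
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Ball"))
            animator.SetBool("Blocked", true);
    }
}
```

Everything editorial about the behavior lives in the Animator's transitions and the Inspector wiring, both of which serialize cleanly into an AssetBundle.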
  Dozens of interactions and controls, simple and complex, that we took for granted had to be specifically integrated into this system so that they could be used from an AssetBundle. Gaze and swipe detection, reassignment of materials and lightmaps, vibration and GPS, realtime data input and subtitles, even billboards: anything that might be needed by a future project was built into NativeComponents.
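The billboard mentioned above is the simplest member of that catalog, and sketching it shows the shape of a NativeComponent: a small, generic behavior shipped in the app, configured entirely from the editor. The class name and details here are my assumptions, not the shipped component.

```csharp
using UnityEngine;

// Sketch of a NativeComponent-style billboard: keeps a label or graphic
// facing the user's AR camera every frame, with no per-project code.
public class Billboard : MonoBehaviour
{
    void LateUpdate()
    {
        var cam = Camera.main;
        if (cam == null) return;

        // Face the camera while keeping the content upright.
        transform.rotation = Quaternion.LookRotation(
            transform.position - cam.transform.position, Vector3.up);
    }
}
```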
  From these basic components, we've been able to put out several high-quality AR stories, described below. After each one, the common combinations of NativeComponents are saved into prefab templates to speed up our next project. Our fastest turnaround so far has been one day. And as we continue to ideate, we improve our SDK with new features. Our work is detailed below, most recent first.
TL;DR: My team and I built a system through which we could develop and remotely deploy complex AR interactives (even a full-on video game) without writing a line of code, all run on a platform half the size of a mobile web browser.
Women's World Cup
  As the United States headed into the Women's World Cup as the clear favorite, we wanted to demonstrate the value of AR in covering sports. I was excited to push the boundaries of my NativeComponents development kit with our first concept: a gamified demonstration of the difficulty of penalty kicks called Make the Save.
  Make the Save first introduced users to the USWNT's starting goalkeeper, Alyssa Naeher, and let them hear her take on one of the most tense moments in a keeper's game: the penalty kick. We created a 1-on-1 feel by placing a photogrammetry capture of her in real space and playing an interview I captured in person.
  Once users hear her advice and perspective, they get to the good part: they must physically block virtual penalty kicks by moving their phone around in space. At first, kicks arrive at tutorial speeds: 25 mph, then 40 mph. But if they pass those tests, I bump them up to realistic speed: 80 mph. This is the editorial crux of the piece: seeing the kicker 12 yards away and feeling the true speed of a pro penalty kick, users understand how near-impossible Alyssa Naeher's task is. This is a perspective you could never get in another digital medium.
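The numbers carry the argument: at 80 mph (about 35.8 m/s), a ball covers the roughly 11 meters from the penalty spot in about 0.3 seconds. A hypothetical kick component, sketched here with made-up names, only needs a unit conversion and a Rigidbody:

```csharp
using UnityEngine;

// Sketch: launch the virtual ball at a real-world speed given in mph.
public class PenaltyKick : MonoBehaviour
{
    public Rigidbody ball;    // assigned in the editor
    public Transform target;  // a point in the goal mouth

    public void Kick(float mph)
    {
        float metersPerSecond = mph * 0.44704f;            // 80 mph ≈ 35.8 m/s
        Vector3 direction = (target.position - ball.position).normalized;
        ball.velocity = direction * metersPerSecond;       // ~11 m / 35.8 m/s ≈ 0.3 s of reaction time
    }
}
```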
  Meet the Team was more traditionally newsy, and was more of a test of my live-update capabilities. We featured a 3D field in AR space, and allowed users to peruse the roster and see where on the field they could expect to find any given player. I used a new NativeComponents update to populate scores, stats, standings, and schedules in real time, allowing users to pop in and see current data at any time without having to rebuild/re-deploy the AssetBundle.
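The live-update pattern can be sketched as a simple poll-and-refresh loop inside the deployed bundle. The endpoint URL, JSON shape, and class names here are placeholders, not our real feed; the real NativeComponents update also handled scheduling, parsing full rosters, and failure cases.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.UI;

// Sketch: a deployed AssetBundle polls a JSON feed and refreshes UI text,
// so current scores appear without rebuilding or re-deploying the bundle.
public class LiveScoreFeed : MonoBehaviour
{
    [System.Serializable]
    class Score { public string match; public string score; }

    public Text scoreLabel;  // assigned in the editor, shipped in the bundle
    const string FeedUrl = "https://example.com/wwc/scores.json"; // placeholder

    IEnumerator Start()
    {
        while (true)
        {
            using (var request = UnityWebRequest.Get(FeedUrl))
            {
                yield return request.SendWebRequest();
                if (string.IsNullOrEmpty(request.error))
                {
                    var data = JsonUtility.FromJson<Score>(request.downloadHandler.text);
                    scoreLabel.text = $"{data.match}  {data.score}";
                }
            }
            yield return new WaitForSeconds(30f); // poll interval
        }
    }
}
```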
1619: The Will to Survive
   In acknowledging the 400th anniversary of the first enslaved Africans' arrival in what would become the United States, the USA TODAY NETWORK commissioned research and travel to create several stories, bringing in experts in African and African-American culture for writing, narration, and art. My team applied these lessons to create an immersive experience that placed users in the same style of boat in which over 150 people were crammed during that voyage. Our 3D artist created the scene from shipbuilding plans, while my responsibilities included development, lighting, particle effects, and animation.
Skyscrapers
  This experience was a straightforward concept: give users a god's-eye view of some of the world's tallest buildings. The wow factor centered on the absurd size difference between the Burj Khalifa and any other building, but we continued to refine our user interface habits using what we learned from Oscars AR. All models were created by our illustrious art guy Will Austin.
Oscars AR
  Perhaps my favorite piece so far, Oscars AR really demonstrated the utility of photogrammetry to my team and to my company. Users view each costume in very high fidelity, and can tap points of interest on each costume component to hear testimony from our interviews with the designers. We found an average 5 minutes engagement time on launch day, an achievement we credit to our gallery-based, rather than linear, experience format. Read more about the creation of the models here.
Notre Dame
  Notre Dame is unique in its turnaround time. From the time my team got the go-ahead to make it to the time it was ready to publish, it took about 7 hours. The main goal was to give users who may have never seen more than a picture of the front of the cathedral a full view, and to identify what was at risk and what had been saved. The model used here was licensed from the web.
The City
  The City is an animated AR movie that provides a new perspective on a USA TODAY podcast of the same name. It shows the growth of a 6-story pile of illegally dumped construction waste scaled relative to the poor neighborhood where it existed in the early 90s: right across from an elementary school. It teases the podcast by sharing the stories of those whose health and property were devastated by the negligence, and links out to encourage users to listen to the whole thing. William Austin created the art, while I did the animation and scripting.
Other Projects
  A lite version of 321 Launch's educational segment:
A live updating tracker showing the path and intensity of the then-active Hurricane Florence:
A photogrammetry gallery and interview of John Carlson of the Washington Capitals:
This video does not belong to me. Click through to YouTube to see the original publisher.